Using latent-semantic analysis and network analysis for monitoring conceptual development
This paper describes and evaluates CONSPECT (from "concept inspection"), an application that analyses states in a learner's conceptual development. It was designed to help online learners and their tutors monitor conceptual development, and also to reduce the workload of tutors doing that monitoring. CONSPECT combines two technologies - Latent Semantic Analysis (LSA) and Network Analysis (NA) - into a technique called Meaningful Interaction Analysis (MIA). LSA analyses the meaning in the textual digital traces learners leave behind on their learning journey; NA provides the analytic instrument to investigate (visually) the semantic structures identified by LSA. This paper describes the validation activities undertaken to show how well LSA matches first-year medical students in 1) grouping similar concepts and 2) annotating text.
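The MIA combination described above can be sketched in miniature: take vectors for concepts (the toy hand-made vectors below stand in for real LSA output) and link concept pairs whose similarity passes a threshold, yielding a small concept network. All vectors, names, and the threshold are illustrative assumptions, not taken from CONSPECT itself.

```python
import math

# Toy "semantic vectors" standing in for LSA output (illustrative only).
concepts = {
    "heart":  [0.9, 0.1, 0.2],
    "pump":   [0.8, 0.2, 0.1],
    "lung":   [0.1, 0.9, 0.3],
    "oxygen": [0.2, 0.8, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Network Analysis step: connect concept pairs whose semantic similarity
# exceeds a threshold, producing a graph of the learner's concept space.
THRESHOLD = 0.8
names = list(concepts)
edges = [
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if cosine(concepts[a], concepts[b]) > THRESHOLD
]
print(edges)
```

With these toy vectors, only the closely related pairs ("heart"/"pump" and "lung"/"oxygen") survive the threshold and become edges in the concept network.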
Monitoring conceptual development with text mining technologies: CONSPECT
This paper evaluates CONSPECT, a service that analyses states in a learner's conceptual development. It combines two technologies - Latent Semantic Analysis (LSA) to analyse text and Network Analysis (NA) to provide visualisations - into a technique called Meaningful Interaction Analysis (MIA). CONSPECT was designed to help both online learners and their tutors monitor conceptual development. This paper reports on the validation experiments undertaken to determine how well LSA matches first-year medical students in clustering concepts and in annotating text. The validation used several techniques, including card sorting and Likert scales. CONSPECT produces almost 'peer'-quality results; what remains to be tested is whether it improves with more advanced learners. One of the experiments showed an average correlation of 0.7 between humans and CONSPECT.
The conundrum of categorising requirements: managing requirements for learning on the move
This paper reports on the experience of eliciting and managing requirements on a large European-based multinational project, whose purpose is to create a system to support learning using mobile technology. The project used the socio-cognitive engineering methodology for human-centered design and the Volere shell and template to document requirements.
We provide details about the project below, describe the Volere tools, and explain how and why we used a flexible categorization scheme to manage the requirements. Finally, we discuss three lessons learned: (1) provide a flexible mechanism for organizing requirements, (2) plan ahead for the RE process, and (3) do not forget 'the waiting room'.
Applying latent semantic analysis to computer assisted assessment in the Computer Science domain: a framework, a tool, and an evaluation
This dissertation argues that automated assessment systems can be useful for both students and educators, provided that their results correspond well with those of human markers. Evaluating such a system is therefore crucial. I present an evaluation framework and show how and why it can be useful for both producers and consumers of automated assessment systems. The framework is a refinement of a research taxonomy that emerged from an analysis of the literature on systems based on Latent Semantic Analysis (LSA), a statistical natural language processing technique that has been used for automated assessment of essays. The evaluation framework can help developers publish their results in a format that is comprehensive, relatively compact, and useful to other researchers.
The thesis claims that, in order to see a complete picture of an automated assessment system, certain pieces must be emphasised. It presents the framework as a jigsaw puzzle whose pieces join together to form the whole picture.
The dissertation uses the framework to compare the accuracy of human markers and EMMA, the LSA-based assessment system I wrote as part of this dissertation. EMMA marks short, free text answers in the domain of computer science. I conducted a study of five human markers and then used the results as a benchmark against which to evaluate EMMA. An integral part of the evaluation was the success metric. The standard inter-rater reliability statistic was not useful; I located a new statistic and applied it to the domain of computer assisted assessment for the first time, as far as I know.
Although EMMA exceeds human markers on a few questions, overall it does not achieve the same level of agreement with humans as humans do with each other. The last chapter maps out a plan for further research to improve EMMA.
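The abstract does not name the agreement statistic the dissertation adopted. As a generic illustration of benchmarking an automated marker against a human marker, the sketch below uses plain Pearson correlation on invented marks; it is not the dissertation's own statistic.

```python
import math

# Invented marks for six short answers (0-3 scale): one human marker
# and a hypothetical automated marker. These numbers are made up.
human = [3, 2, 0, 1, 3, 2]
auto  = [3, 1, 0, 2, 3, 2]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(human, auto)
print(round(r, 3))
```

In practice a correlation between system and humans would be compared against the human-human correlation on the same answers, which is the benchmark the dissertation describes.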
The Learning Grid and E-Assessment using Latent Semantic Analysis
E-assessment is an important component of e-learning and e-qualification. Formative and summative assessment serve different purposes, and both types of evaluation are critical to the pedagogical process. While students are studying, practicing, working, or revising, formative assessment provides direction, focus, and guidance. Summative assessment provides the means to evaluate a learner's achievement and communicate that achievement to interested parties. Latent Semantic Analysis (LSA) is a statistical method for inferring meaning from a text. Applications based on LSA exist that provide both summative and formative assessment of a learner's work. However, the technique's huge computational needs are a major problem. This paper explains how LSA works, describes the breadth of existing applications using LSA, explains how LSA is particularly suited to e-assessment, and proposes research to exploit the potential computational power of the Grid to overcome one of LSA's drawbacks.
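The core LSA computation described above - a term-document matrix reduced to a low-rank semantic space by singular value decomposition - can be sketched as follows. The tiny matrix and vocabulary are illustrative assumptions; a real system would build the matrix from a large corpus, which is where the heavy computation arises.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = short learner texts.
# (Illustrative counts only; a real system derives this from a corpus.)
terms = ["heart", "blood", "pump", "lung", "oxygen"]
X = np.array([
    [2, 0, 1],   # heart
    [1, 1, 0],   # blood
    [1, 0, 1],   # pump
    [0, 2, 0],   # lung
    [0, 1, 1],   # oxygen
], dtype=float)

# LSA: truncated SVD projects documents into a low-rank "semantic space"
# where texts using co-occurring vocabulary end up close together.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                        # latent dimensions to keep
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # one k-dim vector per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 2 share "heart"/"pump" vocabulary, so their latent
# vectors are more similar than those of documents 0 and 1.
print(cosine(doc_vectors[0], doc_vectors[2]))
print(cosine(doc_vectors[0], doc_vectors[1]))
```

For assessment, a student answer and a model answer would each be folded into this space and compared by cosine similarity.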
Using language technologies to support individual formative feedback
In modern educational environments for group learning it is often challenging for tutors to provide timely individual formative feedback to learners. Taking the case of undergraduate Medicine, we have found that formative feedback is generally provided to learners on an ad-hoc basis, usually at the group, rather than individual, level. Consequently, conceptual issues for individuals often remain undetected until summative assessment. In many subject domains, learners will typically produce written materials to record their study activities. One way for tutors to diagnose conceptual development issues for an individual learner would be to analyse the contents of the learning materials they produce, though doing so manually would be a significant undertaking.
CONSPECT is one of six core web-based services of the Language Technologies for Lifelong Learning (LTfLL) project. This European Union Framework 7-funded project seeks to make use of Language Technologies to provide semi-automated analysis of the large quantities of text generated by learners through the course of their learning. CONSPECT aims to provide formative feedback and monitoring of learners’ conceptual development. It uses a Natural Language Processing method, based on Latent Semantic Analysis, to compare learner materials to reference models generated from reference or learning materials.
This paper provides a summary of the service development alongside results from validation of Version 1.0 of the service.
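CONSPECT's actual comparison is LSA-based; as a simplified stand-in for the "compare learner materials to a reference model" step described above, the sketch below uses a plain bag-of-words cosine with invented texts. Texts and thresholds are illustrative, not from the LTfLL project.

```python
import math
from collections import Counter

# Hypothetical texts: a learner's note, a reference model built from
# course material, and an off-topic text for contrast.
learner   = "the heart pumps blood through the body"
reference = "the heart is a pump that moves blood around the body"
unrelated = "photosynthesis converts light energy into chemical energy"

def bow(text):
    """Bag-of-words vector as a word-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# A learner text close to the reference model scores high; an off-topic
# text scores low -- the basis for flagging gaps in conceptual coverage.
print(cosine(bow(learner), bow(reference)))
print(cosine(bow(learner), bow(unrelated)))
```

An LSA-based service improves on this sketch by also matching texts that express the same concept with different vocabulary, which raw word counts cannot do.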
An Insight to Project Manager Personality Traits Improving Team Project Outcomes
Individual personality assessment tools have a strong following among Fortune 100 companies [1]. Besides being used for hiring purposes, individual personality assessment tools give project managers insight into personality and aspirations, as well as how they process and organize information, make decisions, and interact with team members and other stakeholders. The aim of this research study was to explore what personality traits project managers need to lead a project team effectively. To accomplish this, we employed the Big Five Personality® and the Myers-Briggs Type Indicator (MBTI®) personality assessments to identify favorable personality traits and characteristics when managing projects.